
    Confounding-Robust Policy Improvement with Human-AI Teams

    Human-AI collaboration has the potential to transform various domains by leveraging the complementary strengths of human experts and Artificial Intelligence (AI) systems. However, unobserved confounding can undermine the effectiveness of this collaboration, leading to biased and unreliable outcomes. In this paper, we propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM). Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden. We present a deferral collaboration framework for incorporating the MSM into policy learning from observational data, enabling the system to control for the influence of unobserved confounding factors. In addition, we propose a personalized deferral collaboration system to leverage the diverse expertise of different human decision-makers. By adjusting for potential biases, our proposed solution enhances the robustness and reliability of collaborative outcomes. The empirical and theoretical analyses demonstrate the efficacy of our approach in mitigating unobserved confounding and improving the overall performance of human-AI collaborations. Comment: 24 pages
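
    For context, the marginal sensitivity model (MSM) referenced in this abstract is commonly stated as an odds-ratio bound between the nominal treatment propensity and the propensity given the unobserved confounder; the standard formulation (not taken from the paper itself) is

        \Lambda^{-1} \;\le\; \frac{e(x, u)\,/\,\bigl(1 - e(x, u)\bigr)}{e(x)\,/\,\bigl(1 - e(x)\bigr)} \;\le\; \Lambda,

    where e(x) = P(A = 1 \mid X = x) is the observed propensity, e(x, u) = P(A = 1 \mid X = x, U = u) is the propensity given the unobserved confounder U, and \Lambda \ge 1 quantifies the assumed strength of confounding. Policy values are then bounded over all confounding mechanisms consistent with this constraint.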

    Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design

    Designing products to meet consumers' preferences is essential for a business's success. We propose the Gradient-based Survey (GBS), a discrete choice experiment for multiattribute product design. The experiment elicits consumer preferences through a sequence of paired comparisons of partial profiles, and GBS adaptively constructs each paired-comparison question based on the respondent's previous choices. Unlike the traditional random utility maximization paradigm, GBS is robust to model misspecification because it does not require a parametric utility model. By cross-pollinating machine learning and experimental design, GBS scales to products with hundreds of attributes and can design personalized products for heterogeneous consumers. In simulations, we demonstrate the advantage of GBS in accuracy and sample efficiency over existing parametric and nonparametric methods.
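
    The adaptive paired-comparison loop described above can be illustrated with a toy sketch. The simple logistic utility model below is only a stand-in (GBS itself is nonparametric and gradient-guided, and its actual question-selection rule is not reproduced here); the sketch only shows the general pattern of asking the most uncertain comparison and updating a preference estimate.

        import numpy as np

        # Toy adaptive paired-comparison loop (illustrative only).
        rng = np.random.default_rng(0)
        n_attrs = 8                          # number of product attributes (hypothetical)
        true_w = rng.normal(size=n_attrs)    # simulated respondent preferences

        def respond(a, b):
            """Simulated respondent: prefers profile a over b under a logistic model."""
            p = 1.0 / (1.0 + np.exp(-(a - b) @ true_w))
            return rng.random() < p

        w, lr = np.zeros(n_attrs), 0.5       # current preference estimate, step size
        for t in range(50):
            # Candidate pairs of partial profiles (random binary profiles here).
            cands = [(rng.integers(0, 2, n_attrs), rng.integers(0, 2, n_attrs))
                     for _ in range(64)]
            # Adaptive design: ask the most uncertain comparison (prob closest to 0.5).
            probs = [1.0 / (1.0 + np.exp(-(a - b) @ w)) for a, b in cands]
            a, b = cands[int(np.argmin(np.abs(np.array(probs) - 0.5)))]
            y = 1.0 if respond(a, b) else 0.0
            # Gradient step on the pairwise logistic likelihood.
            p = 1.0 / (1.0 + np.exp(-(a - b) @ w))
            w += lr * (y - p) * (a - b)

        print("correlation with true preferences:", np.corrcoef(w, true_w)[0, 1])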

    Robust Conditional GAN from Uncertainty-Aware Pairwise Comparisons

    Conditional generative adversarial networks have shown exceptional generation performance over the past few years. However, they require large amounts of annotated data. To address this problem, we propose a novel generative adversarial network that uses weak supervision in the form of pairwise comparisons (PC-GAN) for image attribute editing. Drawing on Bayesian uncertainty estimation and noise-tolerant adversarial training, PC-GAN estimates attribute ratings efficiently and demonstrates robust resistance to noise. Through extensive experiments, we show both qualitatively and quantitatively that PC-GAN performs comparably to fully supervised methods and outperforms unsupervised baselines. Comment: Accepted for spotlight at AAAI-2
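
    The pairwise-comparison supervision that PC-GAN relies on can be sketched with a minimal attribute scorer trained on comparison labels. This is an illustrative stand-in only: the full method in the abstract also involves a conditional GAN and Bayesian uncertainty estimation, neither of which is reproduced here, and the network shape and data below are hypothetical.

        import torch
        import torch.nn as nn

        # Minimal pairwise-comparison (Bradley-Terry style) training step
        # for an attribute scorer; illustrative, not the PC-GAN architecture.
        scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                               nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

        def pairwise_loss(x_a, x_b, y):
            """y = 1 when image a shows the attribute more strongly than image b."""
            diff = scorer(x_a) - scorer(x_b)          # predicted rating difference
            return nn.functional.binary_cross_entropy_with_logits(diff.squeeze(1), y)

        # One toy update on random tensors standing in for an image-pair batch.
        x_a, x_b = torch.randn(16, 3, 32, 32), torch.randn(16, 3, 32, 32)
        y = torch.randint(0, 2, (16,)).float()
        loss = pairwise_loss(x_a, x_b, y)
        opt.zero_grad(); loss.backward(); opt.step()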

    Improving Negative-Prompt Inversion via Proximal Guidance

    DDIM inversion has revealed the remarkable potential of real image editing within diffusion-based methods. However, the accuracy of DDIM reconstruction degrades as larger classifier-free guidance (CFG) scales are used for enhanced editing. Null-text inversion (NTI) optimizes null embeddings to align the reconstruction and inversion trajectories at larger CFG scales, enabling real image editing with cross-attention control. Negative-prompt inversion (NPI) further offers a training-free, closed-form solution to NTI. However, it may introduce artifacts and is still constrained by DDIM reconstruction quality. To overcome these limitations, we propose Proximal Negative-Prompt Inversion (ProxNPI), extending the concepts of NTI and NPI. We enhance NPI with a regularization term and reconstruction guidance, which reduces artifacts while capitalizing on its training-free nature. Our method provides an efficient and straightforward approach that effectively addresses real image editing tasks with minimal computational overhead. Comment: Code at https://github.com/phymhan/prompt-to-promp
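
    One plausible reading of the proximal guidance idea is to regularize the classifier-free guidance residual with a proximal operator (e.g., soft thresholding) before scaling it, as sketched below. This is only an assumption-laden illustration of the concept; the authors' actual implementation, thresholds, and reconstruction-guidance step live in the linked repository.

        import torch

        def soft_threshold(x, lam):
            """Elementwise soft thresholding, the proximal operator of the L1 norm."""
            return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

        def guided_noise(eps_uncond, eps_text, cfg_scale=7.5, lam=0.05):
            # Standard CFG combines unconditional and text-conditioned predictions;
            # here the residual is thresholded first, damping small components that
            # tend to introduce artifacts at large CFG scales (hypothetical values).
            residual = soft_threshold(eps_text - eps_uncond, lam)
            return eps_uncond + cfg_scale * residual

        # Toy tensors standing in for the UNet's noise predictions at one timestep.
        eps_u, eps_t = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)
        eps = guided_noise(eps_u, eps_t)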

    Cost-Accuracy Aware Adaptive Labeling for Active Learning

    Conventional active learning algorithms assume a single labeler that produces noiseless labels at a given, fixed cost, and aim to achieve the best generalization performance for a given classifier under a budget constraint. However, in many real settings, different labelers have different labeling costs and can yield different labeling accuracies; moreover, a given labeler may exhibit different labeling accuracies for different instances. This setting, active learning with diverse labelers of varying costs and accuracies, arises in many important applications. It is therefore beneficial to understand how to effectively trade off labeling accuracy for different instances, labeling costs, and the informativeness of training instances, so as to achieve the best generalization performance at the lowest labeling cost. In this paper, we propose a new algorithm for selecting instances and labelers (with their corresponding costs and labeling accuracies) that employs a generalization bound for learning with label noise to choose informative instances and labelers, achieving higher generalization accuracy at lower cost. Our algorithm demonstrates state-of-the-art performance on five UCI datasets and a real crowdsourcing dataset.
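
    The instance-and-labeler selection problem can be illustrated with a toy budgeted loop. The simple score below (informativeness times labeler reliability divided by cost) is only a stand-in for the paper's generalization-bound-driven criterion, and the pool sizes, accuracies, and costs are made up for the example.

        import numpy as np

        # Toy budgeted selection of (instance, labeler) pairs; illustrative only.
        rng = np.random.default_rng(1)
        n_pool = 200
        uncertainty = rng.random(n_pool)                     # e.g., 1 - max class probability
        accuracy = np.tile([0.95, 0.85, 0.70], (n_pool, 1))  # per-labeler accuracy (hypothetical)
        cost = np.tile([5.0, 2.0, 1.0], (n_pool, 1))         # per-labeler cost (hypothetical)

        budget, spent, queries = 50.0, 0.0, []
        remaining = set(range(n_pool))
        while remaining:
            idx = np.array(sorted(remaining))
            # Prefer informative instances, reliable labelers, and low cost.
            score = uncertainty[idx, None] * (2 * accuracy[idx] - 1) / cost[idx]
            i, j = np.unravel_index(np.argmax(score), score.shape)
            inst, labeler = int(idx[i]), int(j)
            if spent + cost[inst, labeler] > budget:
                break
            spent += cost[inst, labeler]
            queries.append((inst, labeler))
            remaining.remove(inst)

        print(f"issued {len(queries)} queries for a total cost of {spent:.1f}")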

    Preparation and Physicochemical Properties of Biochar from the Pyrolysis of Pruning Waste of Typical Fruit Tree in North China

    Routine maintenance of fruit trees generates a substantial quantity of pruning waste each year. This waste is a potential feedstock for producing energy, materials, and other products. The feasibility of making biochar from the waste via pyrolysis was evaluated. The effects of tree species (seven species), pruning site, and temperature on the pyrolysis process and on the physicochemical properties of the biochar were studied. Pyrolysis of the different tree species at 500 °C yielded 27.5 to 33.3% biochar, with a high calorific value (approximately 30 MJ/kg), low ash content (approximately 4%), and retention of up to 60% of the carbon present in the feedstock. When the temperature was increased from 400 to 700 °C, the biochar yield decreased from 35.8% to 24.3%, but its properties improved, with the higher heating value rising from 29.2 to 31.3 MJ/kg and the iodine value from 234 to 252 mg/g. The biochar has a good pore structure, with a specific surface area of 237 m²/g, a total pore volume of 0.175 cm³/g, and an average pore size of 2.96 nm. In general, the biochar produced here from the pyrolysis of fruitwood pruning waste could be an ideal feedstock for high-value-added products such as solid fuels, activated carbon, and electrode materials.

    Tensor framelet based iterative image reconstruction algorithm for low-dose multislice helical CT.

    In this study, we investigate the feasibility of improving image quality for low-dose multislice helical computed tomography (CT) via iterative reconstruction with tensor framelet (TF) regularization. TF regularization is a high-order generalization of isotropic total variation regularization. The algorithm is implemented on a GPU platform with fast parallel X-ray forward and backward projections that take the flying focal spot into account. The image reconstruction is solved with the alternating direction method of multipliers, also known as the split Bregman method. The proposed method is validated on experimental data from a Siemens SOMATOM Definition 64-slice helical CT scanner, in comparison with the FDK, Katsevich, and total variation (TV) algorithms. To test performance on low-dose data, ACR and Rando phantoms were scanned at different dosages and the data were evenly undersampled by various factors. The proposed method remains robust for low-dose data with a 25% undersampling factor. Quantitative metrics demonstrate that the proposed algorithm achieves superior results over the other existing methods.
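
    In standard notation (a reconstruction of the setup from the abstract, not text from the paper), the TF-regularized reconstruction solves

        \min_x \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda\,\lVert W x \rVert_1,

    where A is the helical CT forward-projection operator, b the measured projection data, W the tensor framelet transform, and \lambda the regularization weight. A split Bregman / ADMM scheme of the kind the abstract names alternates

        x^{k+1} = \arg\min_x \tfrac{1}{2}\lVert A x - b \rVert_2^2 + \tfrac{\mu}{2}\lVert W x - d^k + v^k \rVert_2^2,
        d^{k+1} = \operatorname{shrink}\!\bigl(W x^{k+1} + v^k,\; \lambda/\mu\bigr),
        v^{k+1} = v^k + W x^{k+1} - d^{k+1},

    with shrink the soft-thresholding operator and \mu a penalty parameter; the exact variable splitting used in the paper may differ.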